# troubleshooting
  • a

    Aditya Verma

    08/17/2025, 7:36 PM
    Below is my config for the controller. I am going to deploy this on EKS. Let me know if anything is missing and whether this is the correct way to configure deep storage.
    Copy code
    controller:
      persistence:
        enabled: true
        accessMode: ReadWriteOnce
        size: 1Gi                 # Small since it's only a staging area
        mountPath: /var/pinot/controller/data
        storageClass: gp3
        extraVolumes: []
        extraVolumeMounts: []
    
      data:
        dir: /var/pinot/controller/data
    
      # S3 DeepStorage config
      config:
        controller.data.dir: "/var/pinot/controller/data"
        pinot.controller.storage.factory.class.s3: org.apache.pinot.plugin.filesystem.S3PinotFS
        pinot.controller.local.temp.dir: "/var/pinot/temp"
    
        # S3 bucket path where segments are permanently stored
        pinot.controller.segment.fetcher.protocols: s3
        pinot.controller.segment.fetcher.s3.class: org.apache.pinot.plugin.filesystem.S3PinotFS
        pinot.controller.segment.uploader.class: org.apache.pinot.plugin.filesystem.S3PinotFS
        pinot.controller.segment.store.uri: "s3://my-pinot-segments"
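    For comparison, the S3 deep-storage example in the Pinot docs points controller.data.dir at the bucket itself and adds a region key plus the generic segment fetcher. A minimal sketch under that assumption (bucket path and region are placeholders, so verify the exact keys against the S3 docs for your version; on EKS the credentials would typically come from the default AWS chain, e.g. an IRSA role):
    Copy code
    controller:
      config:
        controller.data.dir: "s3://my-pinot-segments/controller-data"   # deep store location; path is a placeholder
        controller.local.temp.dir: "/var/pinot/controller/temp"
        pinot.controller.storage.factory.class.s3: org.apache.pinot.plugin.filesystem.S3PinotFS
        pinot.controller.storage.factory.s3.region: us-east-1           # placeholder region
        pinot.controller.segment.fetcher.protocols: file,http,s3
        pinot.controller.segment.fetcher.s3.class: org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher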
  • r

    Rajat

    08/18/2025, 11:47 AM
    Hi team. I have run a cluster in Pinot with deep store. Segments were going to S3 smoothly, and the retention configured at that time was 7 days. The server crashed due to heavy load during a festival, so we decided to reduce the retention from 7 to 4 days. After doing that I reloaded all the segments. How long will it take Pinot to resume ingestion? Basically, how long will it take to delete the older segments? @Xiang Fu @Mayank
  • b

    Balkar Lathwal

    08/18/2025, 3:18 PM
    Hi, I have added a new multivalued string field to a Pinot table. It is getting ingested as expected and I am able to query it using the Pinot query web interface. But when I try the same query using a Java/Python client, it throws the following exception.
  • m

    mat

    08/18/2025, 5:51 PM
    Is there a way to get a multi-valued string field to contain a zero-length array? If null support is set to store 'null' for the MV string field, I get ['null']. If I do
    field is not null
    the indexed null correctly registers the field as null. But the length of the array is still 1 and it still contains a literal string. This makes string matching a little wonky, as there is always a value in the array to be matched, so if a user tries to check whether an array contains the text 'null' it returns a value instead of nothing. Is there a way to set my default for MV fields to an empty array to avoid this string-matching issue? I know I can just add
    field is not null
    to all my queries, but I want to see if there is a better answer I am not seeing.
  • a

    Aditya Verma

    08/18/2025, 9:11 PM
    Can anyone explain why I need the config below? I understand it broadly, but how and why would the dedup metadata keep increasing if my primary key stays the same? I didn't quite get the concept here.
    Copy code
    {
      ...
      "dedupConfig": {
        "dedupEnabled": true,
        "hashFunction": "NONE",
        "dedupTimeColumn": "mtime",
        "metadataTTL": 30000
      },
      ...
    }
  • s

    San Kumar

    08/19/2025, 6:19 AM
    Is merge rollup supported for OFFLINE tables with APPEND only? Is it also supported for REFRESH? Can we schedule MergeRollupTask with a cron expression? Can you please help me with this? (edited)
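    On the cron part: as far as I recall from the minion task docs, you can add a schedule key (a Quartz cron expression) to the task's config map, provided the controller has controller.task.scheduler.enabled=true. A minimal sketch, with the cron value and bucket settings purely illustrative:
    Copy code
    "task": {
      "taskTypeConfigsMap": {
        "MergeRollupTask": {
          "schedule": "0 0 2 * * ?",
          "1day.mergeType": "rollup",
          "1day.bucketTimePeriod": "1d",
          "1day.bufferTimePeriod": "1d"
        }
      }
    }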
  • a

    Alexander Maniates

    08/19/2025, 7:09 PM
    Hello, I have been trying to adapt our batch ingestion pipeline to start using
    createMetadataTarGz
    for segment creation and
    preferMetadataTarGz
    for the metadata push job. In testing this, I ran into errors on the controller side where the DefaultMetadataExtractor expects a segment tar ball to contain a
    /v3
    subdir, so it fails when it receives just the slim tar ball which only contains
    [metadata.properties, creation.meta]
    and not the
    /v3/
    subdir [1] [2] [3] [4]. I have written up a fix that we are testing on our end: https://github.com/apache/pinot/pull/16635. I am curious whether we might be doing something wrong on our end, or if folks are using a custom HadoopSegmentCreationMapper that produces a differently structured tar ball for the segment metadata? Or are folks implementing a different
    MetadataExtractor
    for their own use? We are still on 1.2.0, but it looks like the code hasn't changed much around this.
  • j

    Jonathan Baxter

    08/21/2025, 5:08 PM
    Hi all, I've set up daily batch ingestion for a table that I have configured with
    "segmentIngestionType": "REFRESH",
    but as I'm generating the data with BigQuery, the number of output part-files may vary occasionally. I'm seeing that if we export 16 files on day 1, it loads 16 segments, but if we export 15 files on day 2, it only replaces 15 segments and leaves the 16th segment from day 1, leaving duplicate/bad data in the table. Are there nice ways of handling this problem? My plan was to set up a task that reads the existing number of segments and manually deletes any extras, but there would still be a small window with duplicate data, so if there are better ways, I'm all ears.
  • r

    Rohini Choudhary

    08/21/2025, 7:10 PM
    Hello team, we are trying to enable Fault-Domain-Aware Instance Assignment on our Pinot cluster, version 1.3.0. For testing, we have created 3 servers and put them in 3 separate pools (1, 2, 3 respectively). One of the server configs is as follows:
    Copy code
    "pool": {
          "DefaultTenant_REALTIME": "1"
        }
      },
      "listFields": {
        "TAG_LIST": [
          "DefaultTenant_OFFLINE",
          "DefaultTenant_REALTIME"
        ]
    But while creating the table we are getting the following error:
    Copy code
    {
      "code": 500,
      "error": "Index 3 out of bounds for length 3"
    }
    Our table config looks like this:
    Copy code
    {
      "REALTIME": {
        "tableName": "otel_spans_REALTIME",
        "tableType": "REALTIME",
        "segmentsConfig": {
          "retentionTimeUnit": "DAYS",
          "retentionTimeValue": "1",
          "segmentPushType": "APPEND",
          "timeColumnName": "startTimeUnixMilli",
          "minimizeDataMovement": false,
          "schemaName": "otel_spans",
          "replication": "3",
          "completionConfig": {
            "completionMode": "DOWNLOAD"
          }
        },
        "instanceAssignmentConfigMap": {
          "CONSUMING": {
            "partitionSelector": "FD_AWARE_INSTANCE_PARTITION_SELECTOR",
            "tagPoolConfig": {
              "tag": "DefaultTenant_REALTIME",
              "poolBased": true
            },
            "replicaGroupPartitionConfig": {
              "replicaGroupBased": true,
              "numReplicaGroups": 3
            }
          }
        },
        "tenants": {
          "broker": "DefaultTenant",
          "server": "DefaultTenant",
          "tagOverrideConfig": {}
        },
        "tableIndexConfig": {
          "rangeIndexVersion": 2,
          "loadMode": "MMAP",
          "autoGeneratedInvertedIndex": false,
          "createInvertedIndexDuringSegmentGeneration": false,
          "enableDefaultStarTree": false,
          "enableDynamicStarTreeCreation": false,
          "aggregateMetrics": false,
          "nullHandlingEnabled": false,
          "columnMajorSegmentBuilderEnabled": true,
          "optimizeDictionary": false,
          "optimizeDictionaryForMetrics": false,
          "noDictionarySizeRatioThreshold": 0.85,
          "noDictionaryColumns": [
            "traceId",
            "spanId",
            "parentSpanId",
            "resourceAttributes",
            "attributes",
            "startTimeUnixMilli",
            "endTimeUnixMilli",
            "statusMessage",
            "events"
          ],
          "invertedIndexColumns": [
            "serviceName",
            "name",
            "statusCode"
          ],
          "bloomFilterColumns": [
            "traceId"
          ],
          "onHeapDictionaryColumns": [],
          "rangeIndexColumns": [
            "duration"
          ],
          "sortedColumn": [
            "startTimeUnixMilli"
          ],
          "varLengthDictionaryColumns": []
        },
        "metadata": {},
        "quota": {},
        "routing": {
          "instanceSelectorType": "replicaGroup",
          "segmentPrunerTypes": [
            "time"
          ]
        },
        "query": {},
        "fieldConfigList": [
          {
            "name": "startTimeUnixMilli",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "endTimeUnixMilli",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "traceId",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "spanId",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "parentSpanId",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "resourceAttributes",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "events",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              }
            },
            "tierOverwrites": null
          },
          {
            "name": "attributes",
            "encodingType": "RAW",
            "indexTypes": [],
            "indexes": {
              "forward": {
                "compressionCodec": "ZSTANDARD",
                "deriveNumDocsPerChunk": false,
                "rawIndexWriterVersion": 4
              },
              "json": {
                "compressionCodec": "ZSTANDARD",
                "maxLevels": 1,
                "excludeArray": true,
                "disableCrossArrayUnnest": true,
                "includePaths": null,
                "excludePaths": null,
                "excludeFields": null,
                "indexPaths": null
              }
            },
            "tierOverwrites": null
          }
        ],
        "ingestionConfig": {
          "streamIngestionConfig": {
            "streamConfigMaps": [
              {
                "streamType": "kafka",
                "stream.kafka.topic.name": "flattened_spans",
                "stream.kafka.broker.list": "kafka:9092",
                "stream.kafka.consumer.type": "lowlevel",
                "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
                "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
                "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
                "realtime.segment.flush.threshold.rows": "0",
                "realtime.segment.flush.threshold.time": "30m",
                "realtime.segment.flush.threshold.segment.size": "300M",
                "realtime.segment.serverUploadToDeepStore": "true"
              }
            ]
          },
          "continueOnError": false,
          "rowTimeValueCheck": false,
          "segmentTimeValueCheck": true
        },
        "isDimTable": false
      }
    }
    Does anyone have any idea? The error is not very clear about what is wrong with the config. One more observation: if we only apply pool-based instance assignment by removing the
    "partitionSelector": "FD_AWARE_INSTANCE_PARTITION_SELECTOR",
    part from the config, it works properly.
  • m

    madhulika

    08/22/2025, 4:34 AM
    Hi team / @Mayank, I am facing an issue publishing reports to Tableau Server. Reports get published, but once I try to view one it keeps asking for the password and does not embed or accept it. Please help. In Tableau Desktop I am not facing this issue.
  • b

    Balkar Lathwal

    08/25/2025, 10:20 AM
    Hey @Xiang Fu, I am facing a weird issue when filtering on a multivalued field; the WHERE clause is not working correctly.
  • m

    madhulika

    08/26/2025, 9:48 AM
    Hi @Kartik Khare, I am unable to publish a report using the StarTree connector in Tableau. Even when the report is published, the password does not embed when basic authentication is added; it keeps asking for the password and does not accept it even when provided. It always errors out.
  • p

    prasanna

    08/26/2025, 2:13 PM
    Hi team, I have a query. We observe S3 connection timeout issues in a customer environment. Based on what we see in the log, the default number of retries is 3 and the timeout is 2 seconds. We are testing configurations to increase these defaults. My concern is: do we document anywhere in the Pinot docs an SLA that the network should adhere to for Pinot to function correctly?
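    On the retry/timeout side: if I remember the S3 plugin docs correctly, S3PinotFS exposes HTTP-client settings that can be raised from their defaults. The keys below are from memory and the values are only illustrative, so please double-check them against the S3 docs for your Pinot version:
    Copy code
    pinot.server.storage.factory.s3.httpclient.maxConnections: 50
    pinot.server.storage.factory.s3.httpclient.socketTimeout: 30s
    pinot.server.storage.factory.s3.httpclient.connectionTimeout: 5s
    pinot.server.storage.factory.s3.httpclient.connectionAcquisitionTimeout: 10s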
  • z

    ZEBIN KANG

    08/26/2025, 9:45 PM
    Hey team 👋, we are trying to optimize Pinot's indexing for time-series data. We currently have a column request_ts with a timestamp in seconds, so we define it as:
    Copy code
    {
      "name": "request_ts",
      "dataType": "TIMESTAMP",
      "format": "1:SECONDS:EPOCH",
      "granularity": "1:SECONDS"
    },
    To improve indexing, we are also doing: 1. "timeColumnName": "request_ts", 2. "rangeIndexColumns": ["request_ts"], 3. "routing": {"segmentPrunerTypes": ["time"]}, 4. adding request_ts to timestampConfig with granularities like ["DAY","WEEK","MONTH"]. Could you please share whether these changes help, or whether some of them do not improve performance much? 🙇 cc: @Neeraja Sridharan @Sai Tarun Tadakamalla
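    For item 4, the timestamp index is declared per column in fieldConfigList; a minimal sketch of what that entry could look like, assuming the stock TIMESTAMP index type described in the docs:
    Copy code
    {
      "name": "request_ts",
      "encodingType": "DICTIONARY",
      "indexTypes": ["TIMESTAMP"],
      "timestampConfig": {
        "granularities": ["DAY", "WEEK", "MONTH"]
      }
    }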
  • m

    madhulika

    08/27/2025, 4:31 AM
    Hi @Mayank, I am not able to use queryOptions=useMultiStageEngine=true in Tableau Server with the Apache Pinot / StarTree connector. Can someone guide me, please? This works in Tableau Desktop but errors out on the server with an authentication error.
  • m

    madhulika

    08/28/2025, 3:53 AM
    Hi @Mayank, a couple of questions: 1. Can we use dedup in a full-upsert or partial-upsert scenario (https://docs.pinot.apache.org/manage-data/data-import/upsert-and-dedup/upsert)? 2. Can we use a query like select * from table1 where column1 in (id1, id2, id3....id10k) if column1 has an inverted index?
  • s

    Soon

    08/28/2025, 2:25 PM
    Hello Pinot team! Just had a question on inverted indexes. I was reading the doc on the inverted index and referencing the following line:
    Copy code
    As explained in the forward index section, a column that is both sorted and equipped with a dictionary is encoded in a specialized manner that serves the purpose of implementing both forward and inverted indexes. Consequently, when these conditions are met, an inverted index is effectively created without additional configuration, even if the configuration suggests otherwise.
    We have a column that is dictionary-enabled and configured as sorted in a realtime table. We have also set
    autoGeneratedInvertedIndex
    and
    createInvertedIndexDuringSegmentGeneration
    as
    true
    in the table config. However, we are not seeing the inverted index being used in the query's explain plan. Would the inverted index also need to be configured in the table config for it to take effect?
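    For reference, an inverted index on a non-sorted column is requested explicitly in tableIndexConfig; a minimal sketch with a placeholder column name:
    Copy code
    "tableIndexConfig": {
      "invertedIndexColumns": ["yourColumn"],
      "createInvertedIndexDuringSegmentGeneration": true
    }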
  • r

    raghav

    08/28/2025, 3:13 PM
    Hey Team, We’re running Apache Pinot 1.4.0 with 24 servers. On 3 servers, we consistently see errors like: •
    RuntimeException: Caught exception while running BloomFilterSegmentPruner
    (caused by
    TimeoutException
    in
    QueryMultiThreadingUtils.runTasksWithDeadline
    ) •
    RuntimeException: Caught exception while running CombinePlanNode
    (also
    TimeoutException
    ) These errors appear all the time, not just under peak load. We recently increased server RAM, but otherwise no config changes. Unfortunately, I don’t have older logs to check if this was happening before. Has anyone seen similar behavior, and what could cause it to affect only a subset of servers?
  • m

    madhulika

    08/29/2025, 4:05 AM
    Hi @Mayank, is it possible for a single Pinot table to consume from two different regions, i.e. can we pass 2 URLs or 2 broker lists?
  • v

    Vatsal Agrawal

    08/29/2025, 5:28 AM
    Hi Team, We are facing an issue with MergeRollupTask in our Pinot cluster. After the task runs, the original segments are not getting deleted, and we end up with both the original and the merged segments in the table. Retention properties: left as default. Any guidance on what we might be missing would be super helpful. Adding task, table and segments related details in the thread.
  • d

    Deepak Padhi

    08/29/2025, 10:04 AM
    #C011C9JHN7R I am having issues with the StarTree Tableau connector, where the additional properties do not work consistently with respect to query options when trying to set the multistage option to true.
  • d

    Deepak Padhi

    08/29/2025, 10:04 AM
    In Tableau Server it fails to sign in, whereas in Desktop it at least connects.
  • r

    Rajkumar

    08/30/2025, 6:47 PM
    Hey all, I'm very new to Pinot and have been trying to get a realtime table working from Confluent Kafka. Pinot doesn't like something in my config and it times out before the table is created. The ID/API keys do have access to Kafka, so I suspect something is wrong in my config below; any references/pointers are much appreciated.
  • r

    Rajkumar

    08/30/2025, 6:47 PM
    {
      "tableName": "kafka_test_1",
      "tableType": "REALTIME",
      "tenants": {
        "broker": "DefaultTenant",
        "server": "DefaultTenant",
        "tagOverrideConfig": {}
      },
      "segmentsConfig": {
        "timeColumnName": "time",
        "replication": "1",
        "replicasPerPartition": "1",
        "retentionTimeUnit": null,
        "retentionTimeValue": null,
        "completionConfig": null,
        "crypterClassName": null,
        "peerSegmentDownloadScheme": null,
        "schemaName": "kafka_test"
      },
      "tableIndexConfig": {
        "loadMode": "MMAP",
        "invertedIndexColumns": [],
        "createInvertedIndexDuringSegmentGeneration": false,
        "rangeIndexColumns": [],
        "sortedColumn": [],
        "bloomFilterColumns": [],
        "bloomFilterConfigs": null,
        "noDictionaryColumns": [],
        "onHeapDictionaryColumns": [],
        "varLengthDictionaryColumns": [],
        "enableDefaultStarTree": false,
        "starTreeIndexConfigs": null,
        "enableDynamicStarTreeCreation": false,
        "segmentPartitionConfig": null,
        "columnMinMaxValueGeneratorMode": null,
        "aggregateMetrics": false,
        "nullHandlingEnabled": false,
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.topic.name": "PINOT.TEST",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.broker.list": "{}",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka30.KafkaConsumerFactory",
          "stream.kafka.security.protocol": "SASL_SSL",
          "stream.kafka.sasl.mechanism": "OAUTHBEARER",
          "stream.kafka.sasl.login.callback.handler.class": "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler",
          "stream.kafka.sasl.oauthbearer.token.endpoint.url": "{url}",
          "stream.kafka.sasl.jaas.config": "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId='{}' clientSecret='{}' scope='' extension_logicalCluster='{}' extension_identityPoolId='{}';",
          "stream.kafka.ssl.endpoint.identification.algorithm": "https",
          "stream.kafka.consumer.prop.group.id": "{}",
          "stream.kafka.consumer.prop.auto.offset.reset": "earliest",
          "stream.kafka.consumer.prop.request.timeout.ms": "60000",
          "stream.kafka.consumer.prop.metadata.max.age.ms": "60000",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaAvroMessageDecoder",
          "stream.kafka.decoder.prop.schema.registry.url": "https://{}.westeurope.azure.confluent.cloud",
          "stream.kafka.decoder.prop.schema.registry.basic.auth.credentials.source": "USER_INFO",
          "stream.kafka.decoder.prop.schema.registry.basic.auth.user.info": "{key}:{secret}"
        }
      },
      "metadata": {},
      "ingestionConfig": {
        "filterConfig": null,
        "transformConfigs": null
      },
      "quota": {
        "storage": null,
        "maxQueriesPerSecond": null
      },
      "task": null,
      "routing": {
        "segmentPrunerTypes": null,
        "instanceSelectorType": null
      },
      "query": {
        "timeoutMs": null
      },
      "fieldConfigList": null,
      "upsertConfig": null,
      "tierConfigs": null
    }
  • r

    Rajkumar

    09/01/2025, 10:53 AM
    Just to give an update, this was resolved with the below config
  • r

    Rajkumar

    09/01/2025, 10:55 AM
    "streamType": "kafka",
    "stream.kafka.topic.name": "asdas",
    "stream.kafka.consumer.type": "lowlevel",
    "stream.kafka.broker.list": "asasds.westeurope.azure.confluent.cloud:9092",
    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"\" password=\"\";",
    "ssl.endpoint.identification.algorithm": "https",
    "auto.offset.reset": "earliest",
    "<http://stream.kafka.consumer.prop.request.timeout.ms|stream.kafka.consumer.prop.request.timeout.ms>": "60000",
    "<http://stream.kafka.consumer.prop.metadata.max.age.ms|stream.kafka.consumer.prop.metadata.max.age.ms>": "60000",
    "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder",
    "stream.kafka.decoder.prop.schema.registry.rest.url": "<https://dasdsa.westeurope.azure.confluent.cloud>",
    "stream.kafka.decoder.prop.schema.registry.basic.auth.credentials.source": "USER_INFO",
    "<http://stream.kafka.decoder.prop.schema.registry.basic.auth.user.info|stream.kafka.decoder.prop.schema.registry.basic.auth.user.info>": ":",
    "stream.kafka.decoder.prop.schema.registry.schema.name": "KsqlDataSourceSchema",
    "stream.kafka.decoder.prop.format": "AVRO"
  • m

    Mayank

    09/01/2025, 12:03 PM
    Thanks @Rajkumar for confirming
  • n

    Naveen

    09/02/2025, 3:37 PM
    "task": { "taskTypeConfigsMap": { "MergeRollupTask": { "1hour.mergeType": "rollup", "1hour.bucketTimePeriod": "1h", "1hour.bufferTimePeriod": "3h", "1hour.maxNumRecordsPerSegment": "1000000", "1hour.maxNumRecordsPerTask": "5000000", "1hour.maxNumParallelBuckets": "5", "1day.mergeType": "rollup", "1day.bucketTimePeriod": "1d", "1day.bufferTimePeriod": "1d", "1day.roundBucketTimePeriod": "1d", "1day.maxNumRecordsPerSegment": "1000000", "1day.maxNumRecordsPerTask": "5000000", "metric2.aggregationType": "avg", "metric.aggregationType": "avg", "metric3.aggregationType": "avg", "metric4.aggregationType": "avg", "scores_sc.aggregationType": "avg" } } } based on the docs, I understood avg is not supported during aggregation is my understand is correct or we can do average as well. is my above job is correct or not.
  • r

    Rajkumar

    09/02/2025, 4:48 PM
    Hi all, what's a neat way of extracting strings from a concatenated string? For example, 'Raj|25|Male' should go into three fields: Name, Age, Gender.
  • r

    Rajkumar

    09/02/2025, 4:48 PM
    I tried the below, but it doesn't work:
    split(PEXP_DEAL_KEY, '|', 1)
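    One approach, if the goal is three persisted columns, is to split at ingestion time with Groovy transforms in the table's ingestionConfig. A sketch under that assumption (the target column names here are made up, Groovy's split() treats the delimiter as a regex so the pipe has to be escaped, and Groovy transforms may need to be enabled on the cluster depending on the Pinot version):
    Copy code
    "ingestionConfig": {
      "transformConfigs": [
        { "columnName": "name",   "transformFunction": "Groovy({PEXP_DEAL_KEY.split('\\\\|')[0]}, PEXP_DEAL_KEY)" },
        { "columnName": "age",    "transformFunction": "Groovy({PEXP_DEAL_KEY.split('\\\\|')[1]}, PEXP_DEAL_KEY)" },
        { "columnName": "gender", "transformFunction": "Groovy({PEXP_DEAL_KEY.split('\\\\|')[2]}, PEXP_DEAL_KEY)" }
      ]
    }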